
Google may soon demo an AI Search chatbot amid pressure from ChatGPT

Engadget

It seems Google is feeling the heat from OpenAI's ChatGPT. The artificial intelligence-powered chatbot has taken the tech world by storm over the last couple of months, as it can provide users with information they're looking for in an easy-to-understand format. Google sees ChatGPT as a threat to its search business and has shifted plans accordingly over the last several weeks, according to The New York Times. The report claims CEO Sundar Pichai has declared a "code red" and accelerated AI development. Google is reportedly preparing to show off at least 20 AI-powered products and a chatbot for its search engine this year, with at least some set to debut at its I/O conference in May.


Is AI good or bad – and who decides?

#artificialintelligence

One of the most frequently cited technology historians, Professor Melvin Kranzberg, was a major proponent of the law of unintended consequences. That much is obvious in his original 1986 paper, at the point where he explains how he coined the first of his Laws of Technology. "I mean that technology's interaction with the social ecology is such that technical developments frequently have environmental, social and human consequences that go far beyond the immediate purpose of the technical devices and practices themselves, and the same technology can have quite different results when introduced into different contexts or under different circumstances." Going further, Kranzberg observed that many technology-related problems arise when "apparently benign" technologies are introduced at scale. Kranzberg died in 1995; in his time, an example of this phenomenon was DDT – in one context, a pesticide with dangerous side effects; in another, an important weapon to curb the spread of malaria.


Google says it's committed to ethical AI research. Its ethical AI team isn't so sure.

#artificialintelligence

Six months after star AI ethics researcher Timnit Gebru said Google fired her over an academic paper scrutinizing a technology that powers some of the company's key products, the company says it's still deeply committed to ethical AI research. It promised to double its research staff studying responsible AI to 200 people, and CEO Sundar Pichai has pledged his support to fund more ethical AI projects. Jeff Dean, the company's head of AI, said in May that while the controversy surrounding Gebru's departure was a "reputational hit," it's time to move on. But some current members of Google's tightly knit ethical AI group told Recode the reality is different from the one Google executives are publicly presenting. The 10-person group, which studies how artificial intelligence impacts society, is a subdivision of Google's broader new responsible AI organization.


Google made AI language the centerpiece of I/O while ignoring its troubled past at the company

#artificialintelligence

Yesterday at Google's I/O developer conference, the company outlined ambitious plans for its future built on a foundation of advanced language AI. These systems, said Google CEO Sundar Pichai, will let users find information and organize their lives by having natural conversations with computers. All you need to do is speak, and the machine will answer. But for many in the AI community, there was a notable absence in this conversation: Google's response to its own research examining the dangers of such systems. In December 2020 and February 2021, Google first fired Timnit Gebru and then Margaret Mitchell, co-leads of its Ethical AI team. The story of their departure is complex but was triggered by a paper the pair co-authored (with researchers outside Google) examining risks associated with the language models Google now presents as key to its future.


Ethics of AI: Benefits and risks of artificial intelligence

#artificialintelligence

In 1949, at the dawn of the computer age, the French philosopher Gabriel Marcel warned of the danger of naively applying technology to solve life's problems. Life, Marcel wrote in Being and Having, cannot be fixed the way you fix a flat tire. Any fix, any technique, is itself a product of that same problematic world, and is therefore problematic and compromised. Marcel's admonition is often summarized in a single memorable phrase: "Life is not a problem to be solved, but a mystery to be lived."

Despite that warning, seventy years later, artificial intelligence is the most powerful expression yet of humans' urge to solve or improve upon human life with computers. But what are these computer systems? As Marcel would have urged, one must ask where they come from, whether they embody the very problems they would purport to solve. Ethics in AI is essentially questioning, constantly investigating, and never taking for granted the technologies that are being rapidly imposed upon human life.

That questioning is made all the more urgent because of scale. AI systems are reaching tremendous size in terms of the compute power they require, and the data they consume. And their prevalence in society, both in the scale of their deployment and the level of responsibility they assume, dwarfs the presence of computing in the PC and Internet eras. At the same time, increasing scale means many aspects of the technology, especially in its deep learning form, escape the comprehension of even the most experienced practitioners.

Ethical concerns range from the esoteric, such as who is the author of an AI-created work of art, to the very real and very disturbing matter of surveillance in the hands of military authorities who can use the tools with impunity to capture and kill their fellow citizens. Somewhere in the questioning is a sliver of hope that, with the right guidance, AI can help solve some of the world's biggest problems. The same technology that may propel bias can reveal bias in hiring decisions. The same technology that is a power hog can potentially contribute answers to slow or even reverse global warming. The risks of AI at the present moment arguably outweigh the benefits, but the potential benefits are large and worth pursuing.

As Margaret Mitchell, formerly co-lead of Ethical AI at Google, has elegantly encapsulated, the key question is, "what could AI do to bring about a better society?" Mitchell's question would be interesting on any given day, but it comes within a context that has added urgency to the discussion. Her words come from a letter she wrote and posted on Google Drive following the departure of her co-lead, Timnit Gebru, in December.


Google is poisoning its reputation with AI researchers

#artificialintelligence

Google has worked for years to position itself as a responsible steward of AI. Its research lab hires respected academics, publishes groundbreaking papers, and steers the agenda at the field's biggest conferences. But now its reputation has been badly, perhaps irreversibly damaged, just as the company is struggling to put a politically palatable face on its empire of data. The company's decision to fire Timnit Gebru and Margaret Mitchell -- two of its top AI ethics researchers, who happened to be examining the downsides of technology integral to Google's search products -- has triggered waves of protest. Academics have registered their discontent in various ways.


Google offered a professor $60,000, but he turned it down. Here's why

#artificialintelligence

When Luke Stark sought money from Google in November he had no idea he'd be turning down $60,000 from the tech giant in March. Stark, an assistant professor at Western University in Ontario, Canada, studies the social and ethical impacts of artificial intelligence. In late November, he applied for a Google Research Scholar award, a no-strings-attached research grant of up to $60,000 to support professors who are early in their careers. He put in for the award, he said, "because of my sense at the time that Google was building a really strong, potentially industry-leading ethical AI team." Soon after, that feeling began to dissipate.


A researcher turned down a $60k grant from Google because it ousted 2 top AI ethics leaders: 'I don't think this is going to blow over'

#artificialintelligence

In a sign of continued blowback from Google's controversial ousting of two top artificial intelligence leaders, a researcher just publicly turned down a major grant from the company. Late last year, Luke Stark, an assistant professor at the University of Western Ontario researching the social and ethical impacts of artificial intelligence, applied for a Google Research Scholar award. Each year, the company offers grants to early-career professors pursuing topics relevant to Google's fields of interest. Stark applied with plans to put any funding towards his further research into how technology such as mood-tracking apps and facial recognition are used to monitor human emotions. "My impression was that Google was really pulling together a top ethical AI team," he told Insider.


The Departure of 2 Google AI Researchers Spurs More Fallout

#artificialintelligence

Monday morning, some of the world's top minds in robotics and machine learning were due to convene for a virtual, invite-only research workshop hosted by Google. Two invited academics didn't log on as scheduled: they withdrew to protest Google's treatment of two women who've said they were unjustly fired from the company's artificial intelligence research division. A third academic, who had previously received funding from Google, took his own stand, saying he would no longer apply for its support. Although small in scale, the boycott illustrates some of the damage to Google's reputation from the acrimonious departures of Timnit Gebru and Margaret Mitchell, co-leaders of a team working to make AI systems more ethical. The controversy has drawn new attention to the influence of tech companies on AI research, and has led researchers inside and outside of Google to ask whether that influence is distorting research into AI's impact on society.